Fix map_query_sql benchmark duplicate key error #18427
Conversation
The build_keys() function was generating 1000 random keys from range 0..9999,
which could result in duplicate keys due to the birthday paradox. The map()
function requires unique keys, causing the benchmark to fail with:
'Execution("map key must be unique, duplicate key found: {key}")'
This fix ensures all generated keys are unique by:
- Using a HashSet to track seen keys
- Only adding keys to the result if they haven't been seen before
- Continuing to generate until exactly 1000 unique keys are produced
Fixes apache#18421
LGTM.
Jefffrey left a comment:
Looks like you need to resolve some conflicts.
```diff
-    let mut keys = vec![];
-    for _ in 0..1000 {
-        keys.push(rng.random_range(0..9999).to_string());
+    let mut seen = HashSet::with_capacity(1000);
```
We could also make keys a HashSet and just keep inserting into it until it reaches 1000 instead of having both keys and seen
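A minimal sketch of that suggestion, assuming build_keys() takes a generic Rng and returns Vec<String> like the original helper (illustrative, not the merged code):

```rust
use std::collections::HashSet;

use rand::Rng;

// Collect directly into a HashSet, which rejects duplicates on insert,
// then convert to strings once 1000 distinct keys have been drawn.
fn build_keys(rng: &mut impl Rng) -> Vec<String> {
    let mut keys = HashSet::with_capacity(1000);
    while keys.len() < 1000 {
        // Duplicate draws are simply ignored by the set.
        keys.insert(rng.random_range(0..9999));
    }
    keys.into_iter().map(|k| k.to_string()).collect()
}
```

This avoids carrying two collections, at the cost of losing the insertion order of the keys (which the benchmark does not appear to depend on).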
Took the liberty of pushing some commits to get this PR over the line.
Thanks @atheendre130505 for initiating this!
Fix map_query_sql benchmark duplicate key error
Description
The build_keys() function was generating 1000 random keys from range
0..9999, which could result in duplicate keys due to the birthday
paradox. The map() function requires unique keys, causing the benchmark
to fail with: Execution("map key must be unique, duplicate key found:
{key}")
This fix ensures all generated keys are unique by:
- Using a HashSet to track seen keys
- Only adding keys to the result if they haven't been seen before
- Continuing to generate until exactly 1000 unique keys are produced
Fixes apache#18421
Which issue does this PR close?
Closes apache#18421
Rationale for this change
The benchmark was non-deterministic: it could pass or fail depending on
random key generation. With 1000 keys drawn from a range of 9999 values,
a collision is all but guaranteed (roughly 50 duplicate pairs are
expected per run), making the benchmark unreliable. This change ensures
uniqueness so the benchmark consistently succeeds and accurately
measures map function performance.
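For reference, the standard birthday-problem approximation makes the
failure rate concrete:

$$
P(\text{all 1000 keys distinct}) = \prod_{i=0}^{999}\left(1-\frac{i}{9999}\right) \approx \exp\left(-\frac{999 \cdot 1000}{2 \cdot 9999}\right) \approx e^{-50} \approx 2\times 10^{-22}
$$

so a run that happened to avoid duplicates entirely was astronomically
unlikely.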
What changes are included in this PR?
- Added the use std::collections::HashSet; import
- Modified build_keys() to:
  - Track generated keys using a HashSet
  - Only add keys if they are unique
  - Continue generating until exactly 1000 unique keys are produced
- File changed: datafusion/core/benches/map_query_sql.rs
- Code changes:
  - Added the HashSet import at the top of the file
  - Replaced the simple loop with uniqueness-checking logic in
    build_keys() (see the sketch below)
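A minimal sketch of the approach described above, assuming the same
generic-Rng signature as in the suggestion earlier (illustrative; the
merged code in map_query_sql.rs may differ in detail):

```rust
use std::collections::HashSet;

use rand::Rng;

// Keep drawing random keys until exactly 1000 unique ones are collected.
// `seen` tracks which values have appeared; `HashSet::insert` returns
// false for duplicates, so each key enters `keys` at most once.
fn build_keys(rng: &mut impl Rng) -> Vec<String> {
    let mut keys = Vec::with_capacity(1000);
    let mut seen = HashSet::with_capacity(1000);
    while keys.len() < 1000 {
        let key = rng.random_range(0..9999);
        if seen.insert(key) {
            keys.push(key.to_string());
        }
    }
    keys
}
```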
Are these changes tested?
The fix was verified by:
- Logic review: the HashSet approach guarantees uniqueness
- Code review: changes follow Rust best practices
- No linter errors
The benchmark itself serves as the test: running cargo bench -p
datafusion --bench map_query_sql should now complete without errors.
Before this fix, the benchmark would fail with a duplicate key error on
virtually every run.
Are there any user-facing changes?
No user-facing changes. This is an internal benchmark fix that ensures
the map_query_sql benchmark runs reliably. It does not affect the public
API or any runtime behavior of DataFusion.
---------
Co-authored-by: Jefffrey <[email protected]>